Add support for FLUX.2 LOKR models #88
Conversation
lstein
left a comment
When loading the LOKR file at https://civitai.com/models/1972981/sex-nudes-other-fun-stuff-snofs, the file is identified correctly as Flux.2 Klein, but I am getting this stack trace on generation:
[2026-02-24 09:20:47,601]::[InvokeAI]::ERROR --> Error while invoking session 80cad9d2-b9c2-41aa-a831-7b94e70b2c92, invocation 246fa67e-049a-45b2-bd0e-b18bf432a56d (flux2_klein_text_encoder): Unsupported lora format: dict_keys(['proj.alpha', 'proj.lokr_w1', 'proj.lokr_w2', 'qkv.alpha', 'qkv.lokr_w1', 'qkv.lokr_w2'])
[2026-02-24 09:20:47,601]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/flux2_klein_text_encoder.py", line 76, in invoke
qwen3_embeds, pooled_embeds = self._encode_prompt(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/flux2_klein_text_encoder.py", line 116, in _encode_prompt
exit_stack.enter_context(
File "/home/lstein/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 526, in enter_context
result = _enter(cm)
^^^^^^^^^^
File "/home/lstein/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/patches/layer_patcher.py", line 39, in apply_smart_model_patches
for patch, patch_weight in patches:
^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/flux2_klein_text_encoder.py", line 216, in _lora_iterator
lora_info = context.models.load(lora.lora)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/shared/invocation_context.py", line 392, in load
return self._services.model_manager.load.load_model(model, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/model_load/model_load_default.py", line 71, in load_model
).load_model(model_config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/model_manager/load/load_default.py", line 59, in load_model
cache_record = self._load_and_cache(model_config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/model_manager/load/load_default.py", line 104, in _load_and_cache
loaded_model = self._load_model(config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/model_manager/load/model_loaders/lora.py", line 136, in _load_model
model = lora_model_from_flux_aitoolkit_state_dict(state_dict=state_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/patches/lora_conversions/flux_aitoolkit_lora_conversion_utils.py", line 79, in lora_model_from_flux_aitoolkit_state_dict
layers[FLUX_LORA_TRANSFORMER_PREFIX + layer_key] = any_lora_layer_from_state_dict(layer_state_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/patches/layers/utils.py", line 35, in any_lora_layer_from_state_dict
raise ValueError(f"Unsupported lora format: {state_dict.keys()}")
ValueError: Unsupported lora format: dict_keys(['proj.alpha', 'proj.lokr_w1', 'proj.lokr_w2', 'qkv.alpha', 'qkv.lokr_w1', 'qkv.lokr_w2'])
Fixed in 43f65b5. The model was being misidentified as AIToolkit format because the AIToolkit detector matches any state dict with a `diffusion_model.double_blocks.` prefix (when no metadata is present) and runs before the BFL PEFT detector. The fix adds an early exclusion to the AIToolkit detector: keys ending in LyCORIS-specific suffixes (`lokr_w1`, `lokr_w2`, `hada_w1_a`, etc.) rule out the AIToolkit format.
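The early-exclusion idea can be sketched roughly as follows. This is a simplified stand-in, not the real InvokeAI code: the names `LYCORIS_SUFFIXES` and `is_aitoolkit_candidate` and the exact suffix list are illustrative.

```python
# Hypothetical sketch: LyCORIS algorithm suffixes (LOKR/LoHA) rule out the
# AIToolkit LoRA format before the generic "diffusion_model.double_blocks."
# prefix heuristic gets a chance to match.
LYCORIS_SUFFIXES = ("lokr_w1", "lokr_w2", "hada_w1_a", "hada_w1_b", "hada_w2_a", "hada_w2_b")

def is_aitoolkit_candidate(keys: list[str]) -> bool:
    # Early exclusion: any LyCORIS-specific key suffix means "not AIToolkit".
    if any(k.rsplit(".", 1)[-1] in LYCORIS_SUFFIXES for k in keys):
        return False
    # The original (over-eager) heuristic: match on the BFL double-blocks prefix.
    return any(k.startswith("diffusion_model.double_blocks.") for k in keys)
```

With this ordering, a BFL-format LOKR state dict falls through to the BFL PEFT detector instead of being claimed by the AIToolkit loader.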
lstein
left a comment
The test LOKR is now recognized correctly and renders. However, I am seeing a lot of these warnings:
[2026-02-24 09:40:54,404]::[InvokeAI]::WARNING --> Unexpected keys found in LoRA/LyCORIS layer, model might work incorrectly! Unexpected keys: {'alpha'}
Fixed in e7681ee. The warning came from `_split_qkv_lokr()` passing the `alpha` key through to `FullLayer.from_state_dict_values()`, which only handles `diff`/`diff_b`; the `alpha/rank` scale is now baked into the split weights instead.
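The fix pattern can be sketched like this. `bake_alpha` is a hypothetical helper (not the real function name), shown with plain nested lists instead of torch tensors:

```python
def bake_alpha(values: dict, rank: int) -> dict:
    """Hypothetical sketch: fold the LOKR alpha/rank scale into the diff
    weight and drop the 'alpha' key, so a full-weight (diff) layer that
    only accepts {"diff", "diff_b"} sees no unexpected keys."""
    values = dict(values)  # don't mutate the caller's state dict
    alpha = values.pop("alpha", None)
    if alpha is not None:
        scale = alpha / rank
        values["diff"] = [[scale * x for x in row] for row in values["diff"]]
    return values
```

Passing the result to the full-weight layer avoids the spurious `Unexpected keys: {'alpha'}` warning because `alpha` never reaches it.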
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
Fix BFL LOKR models being misidentified as AIToolkit format
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
Fix alpha key warning in LOKR QKV split layers
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
48295b8 to b646b2c
Summary
Adds support for LOKR (Kronecker product LoRA) models designed for FLUX.2 Klein 4B/9B. InvokeAI now correctly recognizes the base model and variant for these models and can generate images using them.
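A LOKR layer stores its delta weight as a Kronecker product, ΔW = kron(w1, w2), so factors of shape (ra, ca) and (rb, cb) reconstruct a (ra·rb, ca·cb) weight; this is also what the dimension-detection helpers exploit. A minimal pure-Python sketch (the real code operates on torch tensors, where `torch.kron` does this; the helper signatures below are illustrative, not the PR's actual ones):

```python
def kron(a: list[list[float]], b: list[list[float]]) -> list[list[float]]:
    """Kronecker product of two matrices given as nested lists.
    Entry (i*rb + p, j*cb + q) of the result is a[i][j] * b[p][q],
    so shapes (ra, ca) x (rb, cb) -> (ra*rb, ca*cb)."""
    ra, ca = len(a), len(a[0])
    rb, cb = len(b), len(b[0])
    return [
        [a[i][j] * b[p][q] for j in range(ca) for q in range(cb)]
        for i in range(ra) for p in range(rb)
    ]

# The layer's effective dimensions follow directly from the factor shapes --
# the idea behind dimension helpers like the PR's _lokr_out_dim()/_lokr_in_dim():
def lokr_out_dim(w1_shape: tuple, w2_shape: tuple) -> int:
    return w1_shape[0] * w2_shape[0]

def lokr_in_dim(w1_shape: tuple, w2_shape: tuple) -> int:
    return w1_shape[1] * w2_shape[1]
```

Because the full weight's dimensions are recoverable from the small factors alone, a detector can identify the base model and variant without materializing any Kronecker products.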
Model detection changes (invokeai/backend/model_manager/configs/lora.py):
- Added `_lokr_in_dim()` and `_lokr_out_dim()` helpers to compute LOKR layer dimensions from Kronecker product tensors
- Updated `_is_flux2_lora_state_dict()` to detect FLUX.2 Klein shapes from LOKR keys in BFL and Kohya formats
- Added `_get_flux2_lora_variant()` to detect the Klein 4B vs. 9B variant from LOKR key shapes
- Extended the `_validate_looks_like_lora()` heuristic to include LOKR and LoHA key suffixes

Model loading changes (invokeai/backend/patches/lora_conversions/flux_bfl_peft_lora_conversion_utils.py):
- Extended `is_state_dict_likely_in_flux_bfl_peft_format()` to accept LyCORIS algorithm suffixes (`lokr_w1`, `lokr_w2`, `hada_w1_a`, etc.) in addition to PEFT suffixes
- Added a `_split_bfl_key()` helper to correctly parse keys with single-component suffixes (e.g. `lokr_w1`) vs. two-component suffixes (e.g. `lora_A.weight`)
- Updated `lora_model_from_flux_bfl_peft_state_dict()` and `lora_model_from_flux2_bfl_peft_state_dict()` to use `_split_bfl_key()`
- Added `_split_qkv_lokr()` for FLUX.2 Klein LOKR QKV layer splitting: it computes the full Kronecker product weight and splits it into separate Q/K/V full-weight (diff) layers; for factorized LOKR, the `alpha/rank` scale is baked into the weight, since `FullLayer` always uses scale 1.0
- Updated `_convert_bfl_layer_to_diffusers()` to dispatch QKV splits for both LoRA and LOKR layer types

Bug fixes:
- (invokeai/backend/patches/lora_conversions/flux_aitoolkit_lora_conversion_utils.py) Fixed a crash on generation where BFL-format LOKR models were misidentified as AIToolkit-format LoRAs. The AIToolkit detector matched any model with a `diffusion_model.double_blocks.` prefix (when no metadata is present) and ran before the BFL PEFT detector. Added an exclusion: if any key ends with a LyCORIS-specific suffix (`lokr_w1`, `lokr_w2`, `hada_w1_a`, etc.), the model is not AIToolkit format.
- (invokeai/backend/patches/lora_conversions/flux_bfl_peft_lora_conversion_utils.py) Fixed spurious `Unexpected keys: {'alpha'}` warnings during generation. The `_split_qkv_lokr()` function was incorrectly passing `alpha` to `FullLayer.from_state_dict_values()`, which only handles `{"diff", "diff_b"}`.
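The LOKR QKV split described above can be sketched as: rebuild the full fused-QKV delta weight from the Kronecker factors, bake the scale in, then slice the rows into equal Q/K/V thirds. This is a simplified pure-Python stand-in for the PR's `_split_qkv_lokr()`, which operates on torch tensors and also handles the factorized-`w2` case:

```python
def split_qkv_lokr(w1, w2, scale=1.0):
    """Rebuild the fused QKV delta weight as scale * kron(w1, w2), then
    split it into Q/K/V thirds along the output (row) dimension. The scale
    (alpha/rank for factorized LOKR, else 1.0) is baked into the weights
    because the resulting full-weight (diff) layers are applied with
    scale 1.0."""
    ra, ca = len(w1), len(w1[0])
    rb, cb = len(w2), len(w2[0])
    # Kronecker product with the scale folded in: entry (i*rb+p, j*cb+q).
    full = [
        [scale * w1[i][j] * w2[p][q] for j in range(ca) for q in range(cb)]
        for i in range(ra) for p in range(rb)
    ]
    n = len(full) // 3  # fused QKV rows split evenly into three diff layers
    return full[:n], full[n:2 * n], full[2 * n:]
```

Splitting the materialized weight (rather than the factors) is what makes this work: the Kronecker factors themselves generally cannot be partitioned into per-Q/K/V sub-factors.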
QA Instructions
- The model should be identified as `flux2` base with the correct Klein variant (4B or 9B)
- Generation with the LOKR applied should produce no `Unexpected keys` warnings in the log

Merge Plan
Checklist
- What's New copy (if doing a release after this PR)

Original prompt